
    Improving Performance and Flexibility of Fabric-Attached Memory Systems

    As demand from memory-intensive applications continues to grow, the memory capacity of each computing node is expected to grow at a similar pace. In high-performance computing (HPC) systems, the memory capacity per compute node is determined by the most demanding application likely to run on the system, and hence the average capacity per node in future HPC systems is expected to grow significantly. However, diverse applications with different memory requirements run on HPC systems, and memory utilization can fluctuate widely from one application to another. Since memory modules are private to their computing node, a large percentage of the overall memory capacity will likely be underutilized, especially when many jobs have small memory footprints. Thus, as HPC systems move towards the exascale era, better memory utilization is strongly desired. Moreover, as new memory technologies come to market, the flexibility of upgrading memory and performing system updates becomes a major concern, since memory modules are tightly coupled with the computing nodes. To address these issues, vendors are exploring fabric-attached memory (FAM) systems, in which resources are decoupled and maintained independently. Such a design has driven technology providers to develop new protocols, such as cache-coherent interconnects and memory-semantic fabrics, to connect various discrete resources and help users leverage advances in memory technologies to satisfy growing memory and storage demands. Using these new protocols, FAM can be attached directly to a system interconnect and easily integrated with a variety of processing elements (PEs). Moreover, systems that support FAM can be smoothly upgraded and allow multiple PEs to share FAM memory pools using well-defined protocols. Sharing FAM between PEs enables efficient data sharing, improves memory utilization, reduces cost by allowing flexible integration of different PEs and memory modules from several vendors, and makes the system easier to upgrade. However, adopting FAM in HPC systems introduces new challenges. Since memory is disaggregated and accessed through fabric networks, memory access latency is a crucial concern. In addition, quality of service, security against neighboring nodes, coherency, and the address translation overhead of accessing FAM are problems that require rethinking for FAM systems. To this end, we study and discuss the challenges that need to be addressed in FAM systems. First, we develop a simulation environment to mimic and analyze FAM systems. We then showcase our work in addressing these challenges to improve the performance and feasibility of such systems: enforcing quality of service, providing page migration support, and enhancing security against malicious neighbor nodes.
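
    To make the access-latency concern above concrete, the following is a minimal, hypothetical sketch (not the dissertation's actual simulator) of how a simulation environment might compare node-local DRAM against a fabric-attached memory pool. The class names, latency figures, and contention model are illustrative assumptions only.

# Illustrative sketch (not the authors' simulator): a toy model comparing
# the average access latency of node-local DRAM against a fabric-attached
# memory (FAM) pool. All names, latency numbers, and the contention model
# below are hypothetical placeholders, not measured values.

from dataclasses import dataclass
import random


@dataclass
class MemoryPool:
    name: str
    base_latency_ns: float      # device access latency (assumed)
    fabric_hop_ns: float = 0.0  # extra per-access cost of crossing the fabric

    def access_latency(self, contention: float = 0.0) -> float:
        """Latency of one access, inflated by a contention factor in [0, 1]."""
        return (self.base_latency_ns + self.fabric_hop_ns) * (1.0 + contention)


def run_workload(pool: MemoryPool, accesses: int, contention: float) -> float:
    """Return the average access latency over a synthetic access stream."""
    total = sum(pool.access_latency(contention * random.random())
                for _ in range(accesses))
    return total / accesses


if __name__ == "__main__":
    random.seed(0)
    local_dram = MemoryPool("local DRAM", base_latency_ns=90.0)
    fam_pool = MemoryPool("FAM pool", base_latency_ns=90.0, fabric_hop_ns=250.0)

    for pool in (local_dram, fam_pool):
        avg = run_workload(pool, accesses=100_000, contention=0.3)
        print(f"{pool.name}: avg access latency ~{avg:.1f} ns")

    Under a model like this, the extra fabric hop dominates the average access latency, which is why mechanisms such as quality-of-service enforcement and page migration matter for FAM systems.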

    Prediction Based Opportunistic Routing For Maritime Search And Rescue Wireless Sensor Network

    In recent years, maritime and air crashes have occurred frequently. Existing rescue measures rely only on onboard satellite communications equipment, which makes it difficult to confirm the accurate positioning information and vital signs of drowning people. Recently, wireless sensor networks (WSNs) have been introduced to Maritime Search and Rescue (MSR). WSNs feature quick expansion, self-organization, and self-adaptation to the marine environment. However, the constantly changing node locations and link reliability in a marine search and rescue WSN make the routing metrics between nodes highly dynamic. Traditional routing protocols such as AODV, which establish a fixed single route based on static node information, provide a poor packet delivery rate and do not consider the limited energy of the irreplaceable WSN nodes. We propose to employ opportunistic routing, which makes the best use of the broadcast property of radio propagation: forwarding decisions are based only on a node's neighbor information, and no network-wide flooding is required to establish routes. To maintain up-to-date neighbor information while minimizing the energy cost of collecting it, we propose a lightweight time-series-based routing-metric prediction method that avoids the high communication cost of repeatedly collecting the latest routing metrics between nodes. Results: Our implementation of the opportunistic routing protocol achieved a 30% higher packet delivery ratio than the traditional AODV protocol, and the opportunistic routing protocol with prediction performed slightly better than the version without prediction. Our approach achieved 90% efficiency, whereas 60% efficiency was achieved using the AODV protocol, at the cost of an additional 3% energy consumed by the nodes. We feel that an additional 3% energy consumption to improve delivery by 30% is a good tradeoff.
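
    As an illustration of the routing idea above, here is a minimal, hypothetical sketch (not the paper's implementation) of selecting an opportunistic forwarder set from predicted neighbor metrics. The exponential-smoothing predictor, the energy weighting, and all names below are assumptions made for this example rather than the authors' actual prediction method.

# Illustrative sketch (not the paper's protocol): rank candidate forwarders
# using a simple time-series prediction of link quality blended with the
# neighbor's residual energy. The smoothing factor and energy weight are
# arbitrary example values.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Neighbor:
    node_id: str
    residual_energy: float                     # fraction of battery left, 0..1
    link_quality_history: List[float] = field(default_factory=list)

    def predicted_link_quality(self, alpha: float = 0.5) -> float:
        """Exponential smoothing over past delivery-ratio samples."""
        if not self.link_quality_history:
            return 0.0
        estimate = self.link_quality_history[0]
        for sample in self.link_quality_history[1:]:
            estimate = alpha * sample + (1.0 - alpha) * estimate
        return estimate


def forwarder_set(neighbors: List[Neighbor], max_forwarders: int = 3,
                  energy_weight: float = 0.3) -> List[str]:
    """Rank neighbors by predicted link quality blended with residual energy
    and return the IDs of the top candidates (the opportunistic forwarder set)."""
    def score(n: Neighbor) -> float:
        return ((1.0 - energy_weight) * n.predicted_link_quality()
                + energy_weight * n.residual_energy)
    ranked = sorted(neighbors, key=score, reverse=True)
    return [n.node_id for n in ranked[:max_forwarders]]


if __name__ == "__main__":
    neighbors = [
        Neighbor("buoy-1", 0.8, [0.9, 0.7, 0.8]),
        Neighbor("buoy-2", 0.4, [0.95, 0.9, 0.92]),
        Neighbor("buoy-3", 0.9, [0.3, 0.4, 0.35]),
    ]
    print(forwarder_set(neighbors))  # -> ['buoy-1', 'buoy-2', 'buoy-3']

    In a real deployment the ranked forwarder set would be carried in the broadcast packet, so that the highest-ranked neighbor that actually receives the packet forwards it first; this is how opportunistic routing exploits the broadcast property of radio propagation that the abstract refers to.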